15 research outputs found

    An elastic software architecture for extreme-scale big data analytics

    This chapter describes a software architecture for processing big-data analytics across the complete compute continuum, from the edge to the cloud. The new generation of smart systems requires processing a vast amount of diverse information from distributed data sources. The software architecture presented in this chapter addresses two main challenges. On the one hand, a new elasticity concept enables smart systems to satisfy the performance requirements of extreme-scale analytics workloads. By extending the elasticity concept (known on the cloud side) across the compute continuum in a fog computing environment, combined with the use of advanced heterogeneous hardware architectures at the edge, the capabilities of extreme-scale analytics can increase significantly, integrating both responsive data-in-motion and latent data-at-rest analytics into a single solution. On the other hand, the software architecture also focuses on the fulfilment of the non-functional properties inherited from smart systems, such as real-time behaviour, energy efficiency, communication quality and security, which are of paramount importance for many application domains such as smart cities, smart mobility and smart manufacturing. The research leading to these results has received funding from the European Union’s Horizon 2020 Programme under the ELASTIC Project (www.elastic-project.eu), grant agreement No 825473.
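
    As a rough illustration of the elasticity concept described above, the sketch below places an analytics task on the most energy-efficient node of the edge-to-cloud continuum that still meets its real-time deadline. It is a minimal sketch only: the names (Node, Task, place_task) and the cost model are hypothetical illustrations, not the ELASTIC project's actual interfaces.

```python
# Hypothetical sketch of elasticity across the compute continuum: pick the
# most energy-efficient node that can still meet a task's deadline.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float      # expected round-trip latency to the data source
    energy_cost: float     # relative energy cost per task
    free_capacity: float   # fraction of compute capacity currently unused

@dataclass
class Task:
    deadline_ms: float     # real-time constraint (a non-functional property)
    load: float            # fraction of a node's capacity the task needs

def place_task(task: Task, continuum: list[Node]) -> Node | None:
    """Among nodes meeting the deadline and capacity, pick the cheapest in energy."""
    feasible = [n for n in continuum
                if n.latency_ms <= task.deadline_ms and n.free_capacity >= task.load]
    return min(feasible, key=lambda n: n.energy_cost, default=None)

continuum = [
    Node("edge-gpu", latency_ms=5, energy_cost=3.0, free_capacity=0.2),
    Node("fog-node", latency_ms=20, energy_cost=2.0, free_capacity=0.6),
    Node("cloud", latency_ms=80, energy_cost=1.0, free_capacity=0.9),
]
print(place_task(Task(deadline_ms=30, load=0.5), continuum))  # -> fog-node
```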

    mF2C: Towards a coordinated management of the IoT-fog-cloud continuum

    Fog computing enables location-dependent resource allocation and low-latency services, while fostering novel market and business opportunities in the cloud sector. Aligned with this trend, we refer to the Fog-to-Cloud (F2C) computing system as a new pool of resources, set into a layered and hierarchical model, intended to ease the management and coordination of the entire set of fog and cloud resources. The H2020 project mF2C aims at designing, developing and testing a first attempt at a real F2C architecture. This document outlines the architecture and main functionalities of the management framework designed in the mF2C project to coordinate the execution of services on the envisioned set of heterogeneous and distributed resources.
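
    The layered, hierarchical coordination idea can be pictured with a small sketch: a request is served at the lowest (closest) layer with spare capacity and escalates to the parent layer otherwise. The class and method names below are assumptions for illustration, not the mF2C framework's real interfaces.

```python
# Illustrative Fog-to-Cloud (F2C) hierarchy: serve a request at the lowest
# layer that has capacity, escalating toward the cloud otherwise.
class Layer:
    def __init__(self, name: str, capacity: int, parent: "Layer | None" = None):
        self.name, self.capacity, self.parent = name, capacity, parent

    def allocate(self, demand: int) -> str:
        """Try to serve locally; otherwise coordinate with the layer above."""
        if demand <= self.capacity:
            self.capacity -= demand
            return self.name
        if self.parent is None:
            raise RuntimeError("no layer in the hierarchy can serve the request")
        return self.parent.allocate(demand)

cloud = Layer("cloud", capacity=1000)
fog = Layer("fog-aggregator", capacity=50, parent=cloud)
edge = Layer("edge-device", capacity=4, parent=fog)

print(edge.allocate(2))    # served at the edge (lowest latency)
print(edge.allocate(30))   # escalated to the fog layer
print(edge.allocate(500))  # escalated to the cloud
```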

    Ocular indicators of Alzheimer’s: exploring disease in the retina


    Integrating Commercial Clouds into the WLCG


    Profiling CPU-bound workloads on Intel Haswell-EP platforms

    With the increasing adoption of public and private cloud resources to support the computing-capacity demands of the WLCG, the HEP community has begun studying several benchmarking applications aimed at continuously assessing the performance of virtual machines procured from commercial providers. In order to characterise the behaviour of these benchmarks, in-depth profiling activities have been carried out. In this document we outline our experience in profiling one specific application, the ATLAS Kit Validation, in an attempt to explain an unexpected distribution in the performance samples obtained on systems based on Intel Haswell-EP processors.
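
    The methodology can be illustrated with a toy version of the measurement step: run a CPU-bound kernel many times and inspect the spread of the samples. The workload below is a stand-in for illustration, not the ATLAS Kit Validation benchmark itself.

```python
# Toy profiling sketch: repeated timing of a CPU-bound kernel, then summary
# statistics. A wide spread or multi-modal shape in the samples is the kind
# of anomaly that would prompt deeper profiling (e.g. with perf) on
# Haswell-EP systems.
import time
import statistics

def cpu_kernel(n: int = 200_000) -> float:
    s = 0.0
    for i in range(1, n):
        s += 1.0 / i
    return s

samples = []
for _ in range(30):
    t0 = time.perf_counter()
    cpu_kernel()
    samples.append(time.perf_counter() - t0)

print(f"mean={statistics.mean(samples):.4f}s  stdev={statistics.stdev(samples):.4f}s")
print(f"min={min(samples):.4f}s  max={max(samples):.4f}s")
```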

    Resource Visualization

    Project Specification: The goal of this project was to build a web-based dashboard for visualizing CERN OpenStack cloud resources. I worked on the project for nine weeks, from 27 June 2016 until 26 August 2016. Members of the CERN OpenStack team, cloud users, and visitors interested in understanding the architecture of the CERN OpenStack cloud were the primary audience for the project. The technologies and tools used to build the dashboard are HTML5, CSS, SVG, JavaScript, D3.js, and Dimple.

    Abstract: With over 7300 hypervisors in two data centres in the CERN OpenStack cloud, there is a need to easily visualize the current usage and allocations. My project was to investigate and prototype a service dashboard after collecting the topology information of the CERN cloud. Standard monitoring building blocks, which assist the cloud administration team and WLCG resource management in resource planning and visualization of OpenStack cloud resources, were used for this purpose.
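
    A sketch of the aggregation step behind such a dashboard: collapse per-hypervisor usage records into per-availability-zone totals that a D3.js front end could render. The record layout below is an assumption for illustration, not the actual CERN OpenStack schema.

```python
# Hypothetical topology snapshot reduced to zone-level totals for charting.
from collections import defaultdict

hypervisors = [  # illustrative records, not real CERN data
    {"zone": "main", "vcpus": 32, "vcpus_used": 24},
    {"zone": "main", "vcpus": 32, "vcpus_used": 30},
    {"zone": "wigner", "vcpus": 16, "vcpus_used": 4},
]

totals: dict[str, dict[str, int]] = defaultdict(lambda: {"vcpus": 0, "vcpus_used": 0})
for h in hypervisors:
    totals[h["zone"]]["vcpus"] += h["vcpus"]
    totals[h["zone"]]["vcpus_used"] += h["vcpus_used"]

# Summary a front end would consume, printed here for inspection.
for zone, t in totals.items():
    print(zone, t, f'{100 * t["vcpus_used"] / t["vcpus"]:.0f}% allocated')
```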

    Consolidation of Cloud Computing in ATLAS

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. Finally, a versatile analytics platform for data mining of log files is being used to analyze benchmark data and to diagnose and gain insight into job errors.
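
    The alert-and-react pattern described above can be sketched as threshold rules with a corrective action attached to each. The rule names, thresholds, and the restart action below are illustrative assumptions, not the ATLAS platform's actual configuration.

```python
# Minimal alerting sketch: evaluate metric thresholds and trigger the
# attached corrective action when a rule fires.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    metric: str
    threshold: float
    action: Callable[[float], None]

def restart_factory(value: float) -> None:
    # Hypothetical remedy standing in for a real corrective action.
    print(f"corrective action: restarting VM factory (queue depth {value:.0f})")

rules = [Rule("idle_queue_depth", 1000, restart_factory)]

def evaluate(metrics: dict[str, float]) -> None:
    for rule in rules:
        value = metrics.get(rule.metric, 0.0)
        if value > rule.threshold:
            rule.action(value)  # alert fires and the remedy runs automatically

evaluate({"idle_queue_depth": 2500.0})
```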

    Consolidation of cloud computing in ATLAS

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

    The Evolution of Cloud Computing in ATLAS

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure-as-a-service (IaaS) resources is discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, a system for dynamic location-based discovery of caching proxy servers, and the usage of a data federation to unify the worldwide grid of storage elements into a single namespace and access point. The usage of the experiment's high level trigger (HLT) farm for Monte Carlo production, in a specialized cloud environment, is presented. Finally, we evaluate and compare the performance of commercial clouds using several benchmarks.
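
    One concrete piece mentioned above, dynamic location-based discovery of caching proxies, can be sketched as probing candidates and picking the fastest responder. The hostnames below are placeholders and the real ATLAS mechanism is more involved; this is only a sketch of the idea.

```python
# Sketch of location-based proxy discovery: measure TCP connect time to each
# candidate caching proxy and select the fastest reachable one.
import socket
import time

def probe(host: str, port: int = 3128, timeout: float = 2.0) -> float:
    """Return the TCP connect time to a proxy, or infinity if unreachable."""
    t0 = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - t0
    except OSError:
        return float("inf")

candidates = ["proxy-a.example.org", "proxy-b.example.org"]  # placeholders
best = min(candidates, key=probe)
print(f"selected caching proxy: {best}")
```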

    Managing the cloud continuum: lessons learnt from a real fog-to-cloud deployment

    The wide adoption of the recently coined fog and edge computing paradigms alongside conventional cloud computing creates a novel scenario, known as the cloud continuum, where services may benefit from the overall set of resources to optimize their execution. To operate successfully, such a cloud continuum scenario demands novel management strategies, enabling a coordinated and efficient management of the entire set of resources, from the edge up to the cloud, designed in particular to address key edge characteristics such as mobility, heterogeneity and volatility. The design of such a management framework poses many research challenges and has already promoted many initiatives worldwide at different levels. In this paper we present the results of one of these experiences, driven by an EU H2020 project, focusing on the lessons learnt from a real deployment of the proposed management solution in three different industrial scenarios. We believe that such a description may help in understanding the benefits brought by holistic cloud continuum management, and may also help other initiatives in their design and development processes. This research was funded by the H2020 mF2C Project, grant number 730929, and for UPC authors by the Spanish Ministry of Science, Innovation and Universities and FEDER, grant number RTI2018-094532-B-I00.